Offline vs. Online Learning in Model-based RL: Lessons for Data Collection Strategies
Chen, Jiaqi, Shi, Ji, Sancaktar, Cansu, Frey, Jonas, Martius, Georg
Data collection is crucial for learning robust world models in model-based reinforcement learning. The two most prevalent strategies are actively collecting trajectories by interacting with the environment during online training, and training on a fixed offline dataset. At first glance, the task-agnostic nature of learning environment dynamics makes world models a good candidate for effective offline training. However, the effects of online vs. offline data on world models, and thus on the resulting task performance, have not been thoroughly studied in the literature. In this work, we investigate both paradigms in model-based settings, conducting experiments on 31 different environments. First, we show that online agents outperform their offline counterparts. We identify a key challenge behind the performance degradation of offline agents: encountering out-of-distribution states at test time. This issue arises because, without the self-correction mechanism available to online agents, offline datasets with limited state-space coverage induce a mismatch between the agent's imagination and real rollouts, compromising policy training. We demonstrate that this issue can be mitigated by allowing additional online interactions on a fixed or adaptive schedule, restoring the performance of online training with limited interaction data. We also show that incorporating exploration data helps mitigate the performance degradation of offline agents. Based on these insights, we recommend including exploration data when collecting large datasets, as current efforts predominantly focus on expert data alone.
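The "fixed or adaptive schedule" for extra online interaction could be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the class name, the budget counter, and the idea of triggering interaction when an imagination-vs-reality mismatch signal crosses a threshold are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of an adaptive online-interaction schedule: train mostly
# on the offline dataset, but when the measured mismatch between imagined and
# real rollouts exceeds a threshold, spend part of a limited interaction
# budget to collect fresh, self-correcting data.
class AdaptiveInteractionSchedule:
    def __init__(self, budget, threshold):
        self.budget = budget        # max number of online episodes allowed
        self.threshold = threshold  # mismatch level that triggers interaction
        self.used = 0

    def should_interact(self, mismatch):
        """Request an online episode when imagination drifts from reality."""
        if self.used >= self.budget:
            return False            # interaction budget exhausted
        if mismatch > self.threshold:
            self.used += 1
            return True
        return False
```

A fixed schedule is the special case where `should_interact` ignores the mismatch signal and fires every k-th training step instead.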
The Difficulty of Passive Learning in Deep Reinforcement Learning
Ostrovski, Georg, Castro, Pablo Samuel, Dabney, Will
Learning to act from observational data without active environmental interaction is a well-known challenge in Reinforcement Learning (RL). Recent approaches involve constraints on the learned policy or conservative updates, preventing strong deviations from the state-action distribution of the dataset. Although these methods are evaluated using non-linear function approximation, theoretical justifications are mostly limited to the tabular or linear cases. Given the impressive results of deep reinforcement learning, we argue for a need to more clearly understand the challenges in this setting. In the vein of Held & Hein's classic 1963 experiment, we propose the "tandem learning" experimental paradigm which facilitates our empirical analysis of the difficulties in offline reinforcement learning. We identify function approximation in conjunction with fixed data distributions as the strongest factors, thereby extending but also challenging hypotheses stated in past work. Our results provide relevant insights for offline deep reinforcement learning, while also shedding new light on phenomena observed in the online case of learning control.
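The tandem paradigm pairs an active learner with a passive twin that trains on exactly the same transition stream but never acts. A minimal tabular sketch (a toy illustration under assumed dynamics, not the paper's deep-RL setup) makes the setup concrete; notably, in the tabular case the two learners end up identical, which is consistent with the paper's finding that the difficulty arises from function approximation combined with a fixed data distribution.

```python
import random

def q_update(Q, s, a, r, s2, alpha=0.5, gamma=0.9):
    """Standard tabular Q-learning update on a dict-backed Q-table."""
    best_next = max(Q.get((s2, b), 0.0) for b in (0, 1))
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))

def tandem_run(episodes=200, seed=0):
    """Toy 3-state chain: action 1 moves right, reward 1 on reaching state 2."""
    rng = random.Random(seed)
    Q_active, Q_passive = {}, {}
    for _ in range(episodes):
        s = 0
        for _ in range(5):
            # The ACTIVE agent's epsilon-greedy choice generates the data
            # stream that BOTH learners are trained on.
            if rng.random() < 0.2:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda b: Q_active.get((s, b), 0.0))
            s2 = min(s + 1, 2) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == 2 else 0.0
            q_update(Q_active, s, a, r, s2)   # learns from its own behavior
            q_update(Q_passive, s, a, r, s2)  # same transitions, never acts
            s = 0 if s2 == 2 else s2
    return Q_active, Q_passive
```

With non-linear function approximation the passive twin's value estimates can diverge from the active agent's even on identical data, which is the effect the tandem experiments isolate.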
Natural-Language Multi-Agent Simulations of Argumentative Opinion Dynamics
This paper develops a natural-language agent-based model of argumentation (ABMA). Its artificial deliberative agents (ADAs) are constructed with the help of so-called neural language models recently developed in AI and computational linguistics. ADAs are equipped with a minimalist belief system and may generate and submit novel contributions to a conversation. The natural-language ABMA allows us to simulate collective deliberation in English, i.e. with arguments, reasons, and claims themselves -- rather than with their mathematical representations (as in formal models). This paper uses the natural-language ABMA to test the robustness of formal reason-balancing models of argumentation [Maes & Flache 2013, Singer et al. 2019]: First of all, as long as ADAs remain passive, confirmation bias and homophily updating trigger polarization, which is consistent with results from formal models. However, once ADAs start to actively generate new contributions, the evolution of a conversation is dominated by properties of the agents *as authors*. This suggests that the creation of new arguments, reasons, and claims critically affects a conversation and is of pivotal importance for understanding the dynamics of collective deliberation. The paper closes by pointing out further fruitful applications of the model and challenges for future research.
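The "passive agents polarize" result mirrors classic bounded-confidence opinion dynamics. The numeric sketch below is not the paper's natural-language model; it is a standard confirmation-bias/homophily update (agents only move toward peers whose opinion is already close to their own), offered as an assumed analogue of the formal baseline the paper compares against.

```python
# Bounded-confidence update: each agent averages only over peers within
# `tolerance` of its own opinion (confirmation bias / homophily updating),
# which splits the population into persistent opinion clusters.
def update_opinions(opinions, tolerance=0.3, rate=0.5):
    new = []
    for i, o in enumerate(opinions):
        peers = [p for j, p in enumerate(opinions) if j != i and abs(p - o) <= tolerance]
        if peers:
            new.append(o + rate * (sum(peers) / len(peers) - o))
        else:
            new.append(o)  # no like-minded peers: opinion frozen
    return new
```

Starting from opinions [0.0, 0.1, 0.9, 1.0], repeated updates collapse each pair into a cluster while the gap between clusters persists, i.e. the population polarizes.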
Herding stochastic autonomous agents via local control rules and online global target selection strategies
Auletta, Fabrizia, Fiore, Davide, Richardson, Michael J., di Bernardo, Mario
In this paper we propose a simple yet effective set of local control rules enabling a group of "herder agents" to collect an ensemble of non-cooperative stochastic "target agents" in the plane and contain them in a desired region. We investigate the robustness of the proposed strategies to variations in the number of target agents and in the strength of the repulsive force they feel in proximity of the herders. Extensive numerical simulations confirm the effectiveness of the approach and are complemented by a more realistic validation on commercially available robotic agents via ROS.
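A typical local rule plus global target selection of the kind described could look like the sketch below. This is an assumed illustration, not the authors' exact control law: the selection heuristic (chase the target farthest from the goal region) and the "move behind the target" rule are common choices in herding control, used here only to make the structure concrete.

```python
import math

def select_target(targets, goal, radius):
    """Online global selection: chase the target farthest outside the goal region."""
    outside = [t for t in targets if math.dist(t, goal) > radius]
    return max(outside, key=lambda t: math.dist(t, goal), default=None)

def herder_step(herder, targets, goal=(0.0, 0.0), radius=1.0, offset=0.5, speed=0.2):
    """Local rule: move toward a point 'behind' the selected target (on the ray
    from the goal through the target), so the herder's repulsive influence
    pushes the target back toward the goal region."""
    t = select_target(targets, goal, radius)
    if t is None:
        return herder  # all targets contained; stay put
    d = math.dist(t, goal)
    behind = (goal[0] + (t[0] - goal[0]) * (d + offset) / d,
              goal[1] + (t[1] - goal[1]) * (d + offset) / d)
    step = math.dist(herder, behind)
    if step < 1e-9:
        return herder
    scale = min(1.0, speed / step)  # cap the herder's speed per step
    return (herder[0] + (behind[0] - herder[0]) * scale,
            herder[1] + (behind[1] - herder[1]) * scale)
```

In a full simulation the targets would additionally perform a random walk with a repulsion term away from nearby herders; the robustness questions in the abstract concern how this loop degrades as the number of targets and the repulsion strength vary.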
From Simulation to Real World Maneuver Execution using Deep Reinforcement Learning
Capasso, Alessandro Paolo, Bacchiani, Giulio, Broggi, Alberto
Deep Reinforcement Learning has proven able to solve many control tasks in different fields, but the behavior of these systems is not always as expected when they are deployed in real-world scenarios. This is mainly due to the lack of domain adaptation between simulated and real-world data, together with the absence of a distinction between training and test datasets. In this work, we investigate these problems in the autonomous driving field, focusing on a maneuver-planning module for roundabout insertions. In particular, we present a system based on multiple environments in which agents are trained simultaneously, and we evaluate the behavior of the model in different scenarios. Finally, we analyze techniques aimed at reducing the gap between simulated and real-world data, showing that they increase the generalization capabilities of the system on both unseen and real-world scenarios.
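The multiple-environment training scheme could be sketched as below. This is a hypothetical toy, not the authors' system: the randomized "friction" parameter, the 1-D dynamics, and the batch format are assumptions standing in for scenario-level variation, to show how one policy can be fed interleaved experience from several simulated environments so it cannot overfit a single instance.

```python
import random

def make_envs(n, seed=0):
    """Each toy 'environment' differs only in a randomized dynamics parameter."""
    rng = random.Random(seed)
    return [{"friction": rng.uniform(0.5, 1.5)} for _ in range(n)]

def collect_batch(envs, policy_action, steps_per_env=4):
    """Interleave rollout steps from every environment into one training batch,
    so each policy update sees transitions from all scenarios at once."""
    batch = []
    for env in envs:
        v = 0.0  # toy 1-D velocity state
        for _ in range(steps_per_env):
            a = policy_action(v)
            v = v + a * env["friction"]  # dynamics differ per environment
            batch.append((env["friction"], a, v))
    return batch
```

Domain randomization of visual and dynamics parameters across such parallel environments is one standard way to shrink the simulation-to-real gap the abstract discusses.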
CMUNITED-98: RoboCup-98 Small-Robot World Champion Team
Veloso, Manuela M., Bowling, Michael, Achim, Sorin, Han, Kwun, Stone, Peter
The vision system processes the images, giving the positions of each robot and the ball; this information is sent to an off-board controller and distributed. Although our previous team had accurate navigation, it was not easily interruptible, which is necessary for operating in a highly dynamic environment with inherent mechanical inaccuracies and unforeseen interventions from other agents. The final design includes a battery module supplying three independent power sources and a single board containing all the required electronic circuitry, together with an array of four infrared sensors that is enabled or disabled by the software control. These improvements, made for the RoboCup competition in Paris (Stone, Veloso, and Riley 1999; Kitano et al. 1997), include a robust low-level control algorithm that handles a moving target.